Search for: All records

Creators/Authors contains: "Kondrakunta, Sravya"

  1. Autonomous agents in a multi-agent system work with each other to achieve their goals. However, in a partially observable world, current multi-agent systems are often less effective at achieving their goals. This limitation is due to the agents’ lack of reasoning about other agents and their mental states, and to their inability to share required knowledge with other agents. This paper addresses these limitations by presenting a general approach for autonomous agents to work together in a multi-agent system. In this approach, an agent applies two main concepts: goal reasoning, to determine which goals to pursue and share, and theory of mind, to select the agent(s) with which to share goals and knowledge. We evaluate the performance of our multi-agent system in a Marine Life Survey Domain and compare it to another multi-agent system that randomly selects the agent(s) to which it delegates its goals.
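
    The record above does not include implementation details; the following is a minimal Python sketch of the core idea, contrasting theory-of-mind delegation (pick the teammate believed capable of the goal and least loaded) with the random-selection baseline mentioned in the abstract. All names here (AgentBelief, delegate_with_tom, the "uv-*" teammates) are illustrative assumptions, not the authors’ code.

        import random
        from dataclasses import dataclass, field

        @dataclass
        class AgentBelief:
            """One agent's (possibly stale) beliefs about a teammate's mental state."""
            capabilities: set = field(default_factory=set)  # goal types the teammate can achieve
            busy_goals: int = 0                              # goals we believe it already holds

        def delegate_with_tom(goal_type, beliefs):
            """Pick the teammate believed capable of the goal and least loaded."""
            capable = {name: b for name, b in beliefs.items() if goal_type in b.capabilities}
            if not capable:
                return None                    # keep the goal; no teammate is believed capable
            return min(capable, key=lambda name: capable[name].busy_goals)

        def delegate_randomly(goal_type, beliefs):
            """Baseline from the abstract: delegate to a randomly chosen teammate."""
            return random.choice(list(beliefs))

        beliefs = {
            "uv-1": AgentBelief({"survey", "photograph"}, busy_goals=2),
            "uv-2": AgentBelief({"survey"}, busy_goals=0),
            "uv-3": AgentBelief({"photograph"}, busy_goals=1),
        }
        print(delegate_with_tom("survey", beliefs))    # -> uv-2
        print(delegate_randomly("survey", beliefs))    # -> any teammate, capable or not
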
  2. Goal management in autonomous agents has long been a problem of interest. Solving an agent goal management problem requires multiple goal operations, such as goal selection, change, formulation, delegation, and monitoring. Researchers from different fields have developed several solution approaches with an implicit or explicit focus on goal operations, including scheduling the agents’ goals, performing cost-benefit analysis to select and organize goals, and formulating agent goals in unexpected situations. However, none of these approaches explicitly shed light on an agent’s response when multiple goal operations occur simultaneously. This paper develops an algorithm for agent goal management when multiple goal operations co-occur and shows how handling such interactions improves agent goal management in different domains.
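
    As a rough illustration of what handling co-occurring goal operations could look like computationally, the Python sketch below orders simultaneously triggered operations by a fixed precedence and applies them one at a time, so each operation sees the effects of the previous one. The precedence list and the toy operations are assumptions for illustration, not the algorithm from the paper.

        # Hypothetical arbitration of co-occurring goal operations; names are illustrative.
        PRECEDENCE = ["formulation", "change", "selection", "delegation", "monitoring"]

        def arbitrate(triggered_ops):
            """Order simultaneously triggered goal operations by a fixed precedence."""
            return sorted(triggered_ops, key=PRECEDENCE.index)

        def apply_operations(agenda, triggered_ops):
            """Apply each triggered operation (a callable on the goal agenda) in order."""
            for op_name in arbitrate(triggered_ops):
                agenda = triggered_ops[op_name](agenda)   # each operation sees prior effects
            return agenda

        # Usage with toy operations on a list of goal labels.
        ops = {
            "selection":   lambda goals: goals[:1],                   # keep only the top goal
            "formulation": lambda goals: goals + ["clear-new-mine"],  # add a formulated goal
        }
        print(apply_operations(["survey-area"], ops))   # formulation runs before selection
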
  3. Computational metacognition represents a cognitive systems perspective on high-order reasoning in integrated artificial systems that seeks to leverage ideas from human metacognition and from metareasoning approaches in artificial intelligence. The key characteristic is to declaratively represent and then monitor traces of cognitive activity in an intelligent system in order to manage the performance of cognition itself. Improvements in cognition then lead to improvements in behavior and thus performance. We illustrate these concepts with an agent implementation in a cognitive architecture called MIDCA and show the value of metacognition in problem-solving. The results illustrate how computational metacognition improves performance by changing cognition through meta-level goal operations and learning. 
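
    A minimal Python sketch of the monitor-and-manage loop the abstract describes: record a declarative trace of cognitive events, and when cognition itself keeps failing, post a meta-level goal to change it. This is not MIDCA’s actual API; the trace format, the failure test, and the meta-goal name are all assumptions.

        from collections import deque

        class MetaMonitor:
            def __init__(self, window=3):
                self.trace = deque(maxlen=window)   # declarative record of recent cognitive events

            def observe(self, phase, outcome):
                """Record one cognitive-level event, e.g. ('plan', 'failed')."""
                self.trace.append((phase, outcome))

            def meta_goal(self):
                """If cognition is failing repeatedly, post a goal to change cognition itself."""
                if len(self.trace) == self.trace.maxlen and all(
                    outcome == "failed" for _, outcome in self.trace
                ):
                    return "revise-planning-strategy"   # meta-level goal operation
                return None

        monitor = MetaMonitor()
        for _ in range(3):
            monitor.observe("plan", "failed")
        print(monitor.meta_goal())   # -> revise-planning-strategy
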
  4. Goal reasoning agents can solve novel problems by detecting an anomaly between expectations and observations, generating explanations about plausible causes for the anomaly, and formulating goals to remove the cause. Yet not all anomalies represent problems. This paper addresses how to discern the difference between benign anomalies and those that represent an actual problem for an agent. Furthermore, we present a new definition of the term “problem” in a goal reasoning context. The paper discusses and implements the role of explanations and goal formulation in response to developing problems, and it illustrates the approach in a mine clearance domain and a labor relations domain. We also show the empirical difference between a standard planning agent, an agent that detects anomalies, and an agent that recognizes problems.
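
    To make the anomaly-versus-problem distinction concrete, here is a hypothetical Python sketch: an anomaly is any expectation violation, but it counts as a problem only when a plausible explanation of its cause threatens an active goal. The predicates, the explanation structure, and the threat test are invented for illustration and are not the paper’s implementation.

        def detect_anomaly(expected, observed):
            """An anomaly is any mismatch between expected and observed state facts."""
            return observed - expected

        def is_problem(anomaly, explanations, active_goals):
            """Treat an anomaly as a problem only if some plausible explanation of its
            cause threatens an active goal; otherwise it is benign."""
            return any(
                goal in explanations.get(fact, {}).get("threatens", ())
                for fact in anomaly
                for goal in active_goals
            )

        expected = {"area-a-clear"}
        observed = {"area-a-clear", "unknown-object-at-a"}
        explanations = {"unknown-object-at-a": {"cause": "mine", "threatens": {"safe-passage"}}}

        anomaly = detect_anomaly(expected, observed)
        print(anomaly)                                              # {'unknown-object-at-a'}
        print(is_problem(anomaly, explanations, {"safe-passage"}))  # True -> formulate a removal goal
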
  5. An intelligent agent has many tasks and goals to achieve over specific time intervals. The goals may be assigned to it, or the agent may generate its own goals. In either case, the number of goals at any given time may exceed the agent’s capacity to act on them concurrently. Therefore, an agent must prioritize and order its goals according to their relative importance or significance. We show how an intelligent agent can estimate the trade-off between performance gains and resource costs to make smart choices about the goals it intends to achieve, rather than selecting them on an arbitrary basis. We illustrate this method within the context of an intelligent cognitive architecture that supports various agent models.
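
    One plausible reading of the performance-gain versus resource-cost trade-off is benefit-per-cost selection under a resource budget, sketched below in Python. The scoring rule, the Goal fields, and the example numbers are assumptions for illustration, not the paper’s method.

        from dataclasses import dataclass

        @dataclass
        class Goal:
            name: str
            benefit: float    # estimated performance gain if achieved
            cost: float       # estimated resource cost (e.g., time or fuel)

        def select_goals(goals, budget):
            """Greedily pick goals by benefit-per-cost until the budget is spent."""
            chosen, spent = [], 0.0
            for g in sorted(goals, key=lambda g: g.benefit / g.cost, reverse=True):
                if spent + g.cost <= budget:
                    chosen.append(g.name)
                    spent += g.cost
            return chosen

        goals = [Goal("survey-reef", 8, 4), Goal("photograph-wreck", 3, 1), Goal("tag-turtle", 5, 5)]
        print(select_goals(goals, budget=6))   # ['photograph-wreck', 'survey-reef']
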
  6. Intelligent physical systems, as embodied cognitive systems, must perform high-level reasoning while concurrently managing an underlying control architecture. The link between cognition and control must manage the problem of converting continuous values from the real world to symbolic representations (and back). To generate effective behaviors, reasoning must include a capacity to replan, acquire and update new information, detect and respond to anomalies, and perform various operations on system goals. However, these processes are not independent, and they need further exploration. This paper examines an agent’s choices when multiple goal operations co-occur and interact, and it establishes a method of choosing between them. We demonstrate the benefits, discuss the trade-offs involved, and show positive results in a dynamic marine search task.
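
    The cognition-control link mentioned above can be pictured as thresholding continuous sensor values into symbolic facts a reasoner can plan over, and mapping symbolic goals back to control setpoints. The Python sketch below is a hypothetical illustration; the thresholds, predicate names, and setpoints are not taken from the paper.

        def symbolize(depth_m, battery_pct):
            """Map continuous readings to the symbols a reasoner can plan over."""
            facts = set()
            facts.add("at-depth" if depth_m > 2.0 else "at-surface")
            if battery_pct < 20.0:
                facts.add("battery-low")
            return facts

        def desymbolize(goal_fact):
            """Map a symbolic goal back to a continuous control setpoint."""
            setpoints = {"at-surface": {"target_depth_m": 0.0},
                         "at-depth": {"target_depth_m": 10.0}}
            return setpoints.get(goal_fact, {})

        print(symbolize(depth_m=7.3, battery_pct=14.0))  # {'at-depth', 'battery-low'}
        print(desymbolize("at-surface"))                 # {'target_depth_m': 0.0}
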
  7. Barták, Roman; Bell, Eric (Eds.)
    Autonomous agents often have sufficient resources to achieve the goals that are provided to them. However, in dynamic worlds where unexpected problems are bound to occur, an agent may formulate new goals with further resource requirements. Thus, agents should be smart enough to manage their goals and the limited resources they possess in an effective and flexible manner. We present an approach to the selection and monitoring of goals using resource estimation and goal priorities. To evaluate our approach, we designed an experiment on top of our previous work in a complex mine-clearance domain. The agent in this domain formulates its own goals by retrieving a case to explain uncovered discrepancies and generating goals from the explanation. Finally, we compare the performance of our approach to that of two alternatives.
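
    As an illustration of goal monitoring with resource estimation and priorities, the following hypothetical Python sketch suspends the lowest-priority pending goals once the remaining resource no longer covers them all. The policy, field names, and example values are assumptions rather than the authors’ implementation.

        from dataclasses import dataclass

        @dataclass
        class PendingGoal:
            name: str
            priority: int          # higher = more important
            est_resource: float    # estimated resource still needed (e.g., fuel units)

        def monitor(pending, remaining_resource):
            """Keep the highest-priority goals that the remaining resource can cover."""
            kept, suspended, needed = [], [], 0.0
            for g in sorted(pending, key=lambda g: g.priority, reverse=True):
                if needed + g.est_resource <= remaining_resource:
                    kept.append(g.name)
                    needed += g.est_resource
                else:
                    suspended.append(g.name)
            return kept, suspended

        pending = [PendingGoal("clear-mine-3", 5, 4.0),
                   PendingGoal("survey-sector-b", 2, 3.0),
                   PendingGoal("refuel", 9, 1.0)]
        print(monitor(pending, remaining_resource=5.0))
        # (['refuel', 'clear-mine-3'], ['survey-sector-b'])
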
  8. Cox, Michael T. (Ed.)
    Goal reasoning agents can solve novel problems by detecting an anomaly between expectations and observations, generating explanations about plausible causes for the anomaly, and formulating goals to remove the cause. Yet not all anomalies represent problems. We claim that an agent’s ability to discern the difference between benign anomalies and those that represent an actual problem will increase its performance. Furthermore, we present a new definition of the term “problem” in a goal reasoning context. This paper discusses the role of explanations and goal formulation in response to developing problems and implements the response. The paper illustrates goal formulation in a mine clearance domain and a labor relations domain. We also show the empirical difference between a standard planning agent, an agent that detects anomalies, and an agent that recognizes problems.